
    On the Solution of Linear Programming Problems in the Age of Big Data

    Full text link
    The Big Data phenomenon has spawned large-scale linear programming problems. In many cases, these problems are non-stationary. In this paper, we describe a new scalable algorithm called NSLP for solving high-dimensional, non-stationary linear programming problems on modern cluster computing systems. The algorithm consists of two phases: Quest and Targeting. The Quest phase calculates a solution of the system of inequalities defining the constraint system of the linear programming problem under dynamically changing input data. To this end, the apparatus of Fejér mappings is used. The Targeting phase forms a special system of points having the shape of an n-dimensional axisymmetric cross. The cross moves in the n-dimensional space in such a way that the solution of the linear programming problem is located at all times in an ε-vicinity of the central point of the cross. Comment: Parallel Computational Technologies - 11th International Conference, PCT 2017, Kazan, Russia, April 3-7, 2017, Proceedings (to be published in Communications in Computer and Information Science, vol. 753).
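    As a concrete illustration of the Quest phase, the sketch below applies a Fejér-type mapping, namely the classical single-projection (Agmon-Motzkin-Schoenberg) relaxation, to a system of inequalities Ax ≤ b. The function name and the choice of mapping are illustrative assumptions; the NSLP paper's own Fejér mapping and its handling of dynamically changing data are not reproduced here.

```python
import numpy as np

def quest_feasible_point(A, b, x0, max_iter=10_000, tol=1e-9):
    """Find a point satisfying Ax <= b by repeatedly projecting onto the
    most violated halfspace (Agmon-Motzkin-Schoenberg relaxation), one
    standard instance of a Fejer mapping.  Hypothetical helper, not the
    NSLP paper's exact construction."""
    x = np.asarray(x0, dtype=float)
    row_norms_sq = np.einsum("ij,ij->i", A, A)          # ||a_i||^2 for every row of A
    for _ in range(max_iter):
        residual = A @ x - b                            # positive entries mark violated inequalities
        i = int(np.argmax(residual))
        if residual[i] <= tol:                          # all inequalities hold: done
            return x
        x = x - (residual[i] / row_norms_sq[i]) * A[i]  # orthogonal projection onto a_i . x <= b_i
    return x                                            # best iterate within the iteration budget

# Toy usage: the unit square 0 <= x1, x2 <= 1 written as Ax <= b.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
print(quest_feasible_point(A, b, x0=[5.0, -3.0]))       # a feasible point of the square
```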

    Convergence and polynomiality of primal-dual interior-point algorithms for linear programming with selective addition of inequalities

    Get PDF
    This paper presents the convergence proof and complexity analysis of an interior-point framework that solves linear programming problems by dynamically selecting and adding relevant inequalities. First, we formulate a new primal–dual interior-point algorithm for solving linear programmes in non-standard form with equality and inequality constraints. The algorithm uses a primal–dual path-following predictor–corrector short-step interior-point method that starts with a reduced problem without any inequalities and selectively adds a given inequality only if it becomes active on the way to optimality. Second, we prove convergence of this algorithm to an optimal solution at which all inequalities are satisfied, regardless of whether they have been added by the algorithm or not. We thus provide a theoretical foundation for similar schemes already used in practice. We also establish conditions under which the complexity of such an algorithm is polynomial in the problem dimension, and we discuss the limitations that remain when these conditions do not hold as directions for further research.
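    The sketch below shows only the outer selection loop suggested by this description: solve a reduced LP without the inequalities, add any inequality the current solution violates, and resolve. It is a hypothetical illustration that uses scipy.optimize.linprog as a stand-in solver; the paper's predictor–corrector short-step interior-point machinery and its activity test are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def solve_with_selective_inequalities(c, A_eq, b_eq, A_ub, b_ub, tol=1e-8):
    """Outer loop only: solve a reduced LP that omits the inequalities,
    add any inequality the current solution violates, and resolve.  The
    paper embeds this selection inside its own primal-dual predictor-
    corrector method; scipy's LP solver is used here as a stand-in, so
    this sketch shows the selection scheme, not the paper's algorithm."""
    active = []                                    # indices of inequalities added so far
    while True:
        res = linprog(c,
                      A_ub=A_ub[active] if active else None,
                      b_ub=b_ub[active] if active else None,
                      A_eq=A_eq, b_eq=b_eq, bounds=(None, None))
        if res.status != 0:                        # reduced LP infeasible or unbounded: give up
            return None, active
        x = res.x
        violated = np.where(A_ub @ x - b_ub > tol)[0]
        new = [i for i in violated if i not in active]
        if not new:                                # every omitted inequality already holds
            return x, active
        active.extend(new)
```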

    Bounds on the Complexity of Halfspace Intersections when the Bounded Faces have Small Dimension

    Full text link
    We study the combinatorial complexity of D-dimensional polyhedra defined as the intersection of n halfspaces, with the property that the highest dimension of any bounded face is much smaller than D. We show that, if d is the maximum dimension of a bounded face, then the number of vertices of the polyhedron is O(n^d) and the total number of bounded faces of the polyhedron is O(n^{d^2}). For inputs in general position the number of bounded faces is O(n^d). For any fixed d, we show how to compute the set of all vertices, how to determine the maximum dimension of a bounded face of the polyhedron, and how to compute the set of bounded faces in polynomial time, by solving a polynomial number of linear programs.
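    The algorithms referred to above reduce geometric questions to linear programs. As a hedged illustration of that kind of subroutine (not the paper's construction), the snippet below uses one LP to decide whether the polyhedron {x : Ax ≤ b} is bounded along a given direction.

```python
import numpy as np
from scipy.optimize import linprog

def bounded_in_direction(A, b, c):
    """One LP of the kind such constructions solve many times: is
    sup c.x over the polyhedron {x : Ax <= b} finite?  (Hypothetical
    illustrative subroutine, not the paper's algorithm.)"""
    # linprog minimizes, so maximizing c.x means minimizing -c.x.
    res = linprog(-np.asarray(c, dtype=float), A_ub=A, b_ub=b, bounds=(None, None))
    return res.status == 0                          # 0 = optimal (finite sup), 3 = unbounded

# The halfspace {x1 <= 1} in R^2 is bounded along +x1 but not along +x2.
A = np.array([[1.0, 0.0]]); b = np.array([1.0])
print(bounded_in_direction(A, b, [1.0, 0.0]))       # True
print(bounded_in_direction(A, b, [0.0, 1.0]))       # False
```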

    Integrality gaps of integer knapsack problems

    Get PDF
    We obtain optimal lower and upper bounds for the (additive) integrality gaps of integer knapsack problems. In a randomised setting, we show that the integrality gap of a “typical” knapsack problem is drastically smaller than the integrality gap that occurs in a worst case scenario
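    For context, the additive integrality gap of an instance is the difference between the optimal value of its LP relaxation and its optimal integer value. The toy computation below assumes a simple inequality-form maximisation knapsack, which may differ from the paper's precise setting, and serves only to make the definition concrete.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Illustrative toy instance (not from the paper):
#   max 7*x1 + 4*x2   s.t.   5*x1 + 3*x2 <= 11,   x1, x2 integer >= 0.
p = np.array([7.0, 4.0])        # profits
w = np.array([5.0, 3.0])        # weights
C = 11.0                        # capacity

# LP relaxation: linprog minimizes, so negate the objective.
lp = linprog(-p, A_ub=w.reshape(1, -1), b_ub=[C], bounds=(0, None))
lp_value = -lp.fun              # 15.4, attained at x = (2.2, 0)

# Integer optimum by brute force over the small feasible box.
ip_value = max(p @ np.array(x)
               for x in itertools.product(range(int(C // w[0]) + 1),
                                          range(int(C // w[1]) + 1))
               if w @ np.array(x) <= C)          # 15.0, attained at x = (1, 2)

print(lp_value - ip_value)      # additive integrality gap: 0.4 for this instance
```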

    Advances in low-memory subgradient optimization

    Get PDF
    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems through decomposition, duality or Lagrangian relaxation, which greatly reduce the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows bundle-type algorithms to be used to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n^2, where n is the number of variables of the non-smooth problem. However, with the rapid development of ever more sophisticated models in industry, economics, finance and so on, such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results needed to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly reviewed to introduce the relevant fundamentals of non-smooth optimization. Special attention in this section is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds on the rate of convergence of subgradient optimization compared to the smooth case, but there are modern techniques that allow non-smooth convex optimization problems to be solved faster than these lower complexity bounds dictate. Particular attention is given to the Nesterov smoothing technique, the Nesterov universal approach, and the Legendre (saddle-point) representation approach. The new results on universal Mirror Prox algorithms represent the original parts of the survey. To demonstrate the application of non-smooth convex optimization algorithms to huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of primal-dual type and is proved to be optimal in terms of the lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints. The advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains important references that characterize recent developments in non-smooth convex optimization.
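    To make the constrained Mirror Descent idea concrete, here is a minimal O(n)-memory sketch of a switching subgradient scheme in the Euclidean setup with the adaptive step size mentioned above. It is an assumption-laden illustration of the general technique, not the chapter's two methods; the function name and the toy problem are invented for the example.

```python
import numpy as np

def switching_subgradient(f, g, grad_f, grad_g, x0, eps, max_iter=20_000):
    """O(n)-memory sketch of a switching subgradient scheme (Euclidean
    Mirror Descent) for  min f(x)  s.t.  g(x) <= 0,  with the adaptive
    step h = eps / ||subgradient||^2.  Illustrates the general idea only,
    not the chapter's two adaptive Mirror Descent methods."""
    x = np.asarray(x0, dtype=float)
    best = None
    for _ in range(max_iter):
        if g(x) <= eps:                                # "productive" step: decrease the objective
            if best is None or f(x) < f(best):
                best = x.copy()                        # keep only the best near-feasible point: O(n) memory
            s = grad_f(x)
        else:                                          # "non-productive" step: reduce constraint violation
            s = grad_g(x)
        x = x - (eps / (np.dot(s, s) + 1e-16)) * s     # adaptive step size
    return best if best is not None else x

# Usage: minimize ||x - (2, 2)||^2 subject to x1 + x2 - 1 <= 0; the optimum is (0.5, 0.5).
target = np.array([2.0, 2.0])
f = lambda x: float(np.sum((x - target) ** 2))
g = lambda x: float(x[0] + x[1] - 1.0)
grad_f = lambda x: 2.0 * (x - target)
grad_g = lambda x: np.array([1.0, 1.0])
print(switching_subgradient(f, g, grad_f, grad_g, x0=np.zeros(2), eps=1e-2))
```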

    Reducing Bayesian Mechanism Design to Algorithm Design

    No full text

    On a Variant of the Ellipsoid Method: Using Simplices instead of Ellipsoids

    No full text